

Strongly Incremental Constituency Parsing with Graph Neural Networks

Neural Information Processing Systems

Parsing sentences into syntax trees can benefit downstream applications in NLP. Transition-based parsers build trees by executing actions in a state transition system. They are computationally efficient, and can leverage machine learning to predict actions based on partial trees. However, existing transition-based parsers are predominantly based on the shift-reduce transition system, which does not align with how humans are known to parse sentences. Psycholinguistic research suggests that human parsing is strongly incremental: humans grow a single parse tree by adding exactly one token at each step.
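To make the contrast concrete, the loop below is a minimal sketch of a strongly incremental transition system, not the paper's exact action set: it maintains a single partial tree and, at each step, consumes exactly one token and attaches it to some node on the rightmost chain of that tree (the `oracle` callback, which stands in for a learned policy, and the start symbol `"S"` are illustrative assumptions).

```python
class Node:
    """A constituency-tree node with a label and ordered children."""
    def __init__(self, label, children=None):
        self.label = label
        self.children = children or []

def rightmost_chain(root):
    """Nodes on the path from the root down to the rightmost leaf."""
    chain = [root]
    while chain[-1].children:
        chain.append(chain[-1].children[-1])
    return chain

def parse(tokens, oracle):
    """Strongly incremental loop: one token consumed per step.

    There is always exactly one partial tree; after k steps it spans
    tokens 0..k-1. `oracle(chain, token)` picks the attachment site on
    the rightmost chain (in a real parser, a learned model would).
    """
    root = Node("S")  # assumed start symbol, purely illustrative
    for tok in tokens:
        target = oracle(rightmost_chain(root), tok)
        target.children.append(Node(tok))
    return root
```

With an oracle that always attaches at the root, the result is a flat tree; a shift-reduce parser, by contrast, may hold many disconnected subtrees on its stack before reducing them, so no single partial tree spans the prefix read so far.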


Supplementary Material: Strongly Incremental Constituency Parsing with Graph Neural Networks

Neural Information Processing Systems

The root node must be an internal node. We are now ready to state and prove Theorem 1 and Theorem 2 in the main paper. We prove the correctness of Algorithm 1 by induction on the sentence length n. Case 1-1: last_leaf has siblings, and last_subtree is the root node. We have last_subtree = last_leaf (by the first conditional statement).


Review for NeurIPS paper: Strongly Incremental Constituency Parsing with Graph Neural Networks

Neural Information Processing Systems

Weaknesses: The following are my concerns (questions) about, and requests for confirmation of, the proposed method. I understand that the number of actions required to parse a sentence with the proposed method is n, where n is the number of tokens in the sentence. However, the computational cost of a single action seems considerably more expensive than in existing transition-based algorithms, such as a standard shift-reduce parser. I therefore suspect that the actual runtime for parsing a single sentence is much longer than that of conventional methods. Regardless of whether my suspicion is correct, the experimental results in the current version do not address this point.


Review for NeurIPS paper: Strongly Incremental Constituency Parsing with Graph Neural Networks

Neural Information Processing Systems

This is a borderline paper. The technical contribution is interesting and appreciated by the reviewers. The results match the state of the art on PTB and are better on CTB. There are, however, some concerns with the paper. One of the reviewers summarized it very well: "In its present form, the scope of the paper seems too narrow. It is also somewhat unclear who the intended audience ought to be. If the work aims to say something about psycholinguistics, the experiment should reflect that. If the work's goal is to support NLP applications, further justifications and motivations should be provided as to how a strongly incremental constituency parser might be useful in a current NLP pipeline. If the work aims to shed light on our understanding of GNNs, the paper would need to be refocused accordingly."

